Kwabena Boahen on a computer that works like the brain

I got my first computer when I was a teenager growing up in Accra, and it was a really cool device. You could play games with it. You could program it in BASIC. And I was fascinated. So I went into the library to figure out how this thing worked. I read about how the CPU is constantly shuffling data back and forth between the memory, the RAM, and the ALU, the arithmetic logic unit. And I thought to myself, this CPU really has to work like crazy just to keep all this data moving through the system.

But nobody was really worried about this. When computers were first introduced, they were said to be a million times faster than neurons. People were really excited. They thought computers would soon outstrip the capacity of the brain. This is a quote, actually, from Alan Turing: "In 30 years, it will be as easy to ask a computer a question as to ask a person." This was in 1946. And now, in 2007, it's still not true. And so, the question is, why aren't we really seeing this kind of power in computers that we see in the brain?

What people didn't realize, and we're just beginning to realize right now, is that we pay a huge price for the speed that we claim is a big advantage of these computers. Let's take a look at some numbers. This is Blue Gene, the fastest computer in the world. It's got 120,000 processors; they can basically process 10 quadrillion bits of information per second. That's 10 to the sixteenth. And they consume one and a half megawatts of power. So that would be really great, if you could add that to the production capacity in Tanzania. It would really boost the economy. Just to go back to the States, if you translate the amount of electricity this computer uses into the number of households it could power, you get 1,200 households in the U.S. That's how much power this computer uses.

Now, let's compare this with the brain. This is a picture of, actually, Rory Sayres' girlfriend's brain. Rory is a graduate student at Stanford. He studies the brain using MRI, and he claims that this is the most beautiful brain that he has ever scanned. (Laughter) So that's true love, right there. Now, how much computation does the brain do? I estimate 10 to the 16 bits per second, which is actually very similar to what Blue Gene does. So here is the question: they are doing a similar amount of processing, handling a similar amount of data, so how much energy or electricity does the brain use? And it's actually as much as your laptop computer: it's just 10 watts. So what we are doing right now with computers, with the energy consumed by 1,200 houses, the brain is doing with the energy consumed by your laptop.
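To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python, using only the round figures quoted above (10 to the 16 bits per second for both systems, 1.5 megawatts versus 10 watts); the variable names are just for illustration, and the ratio comes out around 150,000, which is where the round figure of 100,000 times in the next paragraph comes from.

```python
# Back-of-the-envelope energy comparison, using the round figures from the talk.
BITS_PER_SECOND = 1e16        # claimed for both Blue Gene and the brain

blue_gene_watts = 1.5e6       # one and a half megawatts
brain_watts = 10.0            # roughly a laptop's power budget

# Energy spent per bit of information processed (joules per bit).
blue_gene_j_per_bit = blue_gene_watts / BITS_PER_SECOND   # ~1.5e-10 J/bit
brain_j_per_bit = brain_watts / BITS_PER_SECOND           # ~1.0e-15 J/bit

print(f"ratio: {blue_gene_j_per_bit / brain_j_per_bit:,.0f}x")  # ratio: 150,000x
```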

So the question is, how is the brain able to achieve this kind of efficiency? And let me just summarize the bottom line: the brain processes information using 100,000 times less energy than we do right now with this computer technology that we have. How is the brain able to do this? Let's just take a look at how the brain works, and then I'll compare that with how computers work. So, this clip is from the PBS series, "The Secret Life of the Brain." It shows you these cells that process information. They are called neurons. They send little pulses of electricity down their processes to each other, and where they contact each other, those little pulses of electricity can jump from one neuron to the other. That point of contact is called a synapse. You've got this huge network of cells interacting with each other, about 100 billion of them, sending about 10 quadrillion of these pulses around every second. And that's basically what's going on in your brain right now as you're watching this.

How does that compare with the way computers work? In the computer, you have all the data going through the central processing unit, and any piece of data basically has to go through that bottleneck, whereas in the brain you have these neurons, and the data just flows through a network of connections among the neurons. There's no bottleneck here. It's really a network in the literal sense of the word. The net is doing the work in the brain. If you just look at these two pictures, these kinds of words pop into your mind. This one is serial and rigid, like cars on a freeway where everything has to happen in lockstep, whereas this one is parallel and fluid. Information processing is very dynamic and adaptive.

So I'm not the first to figure this out. This is a quote from Brian Eno: "the problem with computers is that there is not enough Africa in them." (Laughter) Brian actually said this in 1995. And nobody was listening then, but now people are beginning to listen because there's a pressing technological problem that we face. And I'll just take you through that a little bit in the next few slides.

There's actually this really remarkable convergence between the devices that we use to compute in computers and the devices that our brains use to compute. The device that computers use is called a transistor. This electrode here, called the gate, controls the flow of current from the source to the drain, these other two electrodes. And that current, electrical current, is carried by electrons, just like in your house and so on. And what you have here is that when you turn on the gate, you get an increase in the amount of current, and you get a steady flow of current. And when you turn off the gate, there's no current flowing through the device. Your computer uses this presence of current to represent a one, and the absence of current to represent a zero.

Now, what's happening is that as transistors are getting smaller and smaller and smaller, they no longer behave like this. In fact, they are starting to behave like the device that neurons use to compute, which is called an ion channel. And this is a little protein molecule. I mean, neurons have thousands of these. And it sits in the membrane of the cell and it's got a pore in it. And these are individual potassium ions that are flowing through that pore. Now, this pore can open and close. But when it's open, because these ions have to line up and flow through one at a time, you get a kind of sporadic, not steady, flow of current. And even when you close the pore, which neurons can do (they open and close these pores to generate electrical activity), because these ions are so small, a few can actually sneak through at a time. So, what you have is that when the pore is open, you get some current sometimes. These are your ones, but you've got a few zeros thrown in. And when it's closed, you have a zero, but you have a few ones thrown in.
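As a rough illustration of those "ones with a few zeros thrown in," here is a toy simulation in Python; it is a sketch of the statistics being described, not a physical model of an ion channel, and the 10 percent error rate is made up.

```python
import random

def read_bit(pore_open: bool, error_rate: float = 0.1) -> int:
    """One noisy reading from a single ion-channel-like device.

    An open pore usually conducts (1) but sometimes stalls (0);
    a closed pore usually blocks (0) but sometimes leaks (1).
    """
    ideal = 1 if pore_open else 0
    flipped = random.random() < error_rate   # made-up noise probability
    return ideal ^ flipped

open_samples = [read_bit(pore_open=True) for _ in range(1000)]
closed_samples = [read_bit(pore_open=False) for _ in range(1000)]
print("open pore,   fraction of ones:", sum(open_samples) / 1000)    # ~0.9: ones with zeros thrown in
print("closed pore, fraction of ones:", sum(closed_samples) / 1000)  # ~0.1: zeros with ones thrown in
```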

Now, this is starting to happen in transistors. And the reason why that's happening is that, right now, in 2007, with the technology that we are using, a transistor is big enough that several electrons can flow through the channel simultaneously, side by side. In fact, about 12 electrons can all be flowing through this way. And that means that a transistor corresponds to about 12 ion channels in parallel. Now, in a few years' time, by 2015, we will have shrunk transistors so much. This is what Intel does to keep adding more cores onto the chip. Or your memory sticks: the ones you have now can carry one gigabyte of stuff on them, where before it was 256 megabytes. Transistors are getting smaller to allow this to happen, and technology has really benefited from that.

But what's happening now is that by 2015, the transistor is going to become so small that only one electron at a time can flow through the channel, and that corresponds to a single ion channel. And you start having the same kind of traffic jams that you have in the ion channel. The current will turn on and off at random, even when it's supposed to be on. And that means your computer is going to get its ones and zeros mixed up, and that's going to crash your machine.
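The jump from about 12 electrons to a single electron can be illustrated the same way; this sketch treats each parallel path as a noisy channel and the transistor's output as a majority vote, which is a loose analogy rather than device physics, and the error rate is again made up.

```python
import random

def noisy_channel(intended: int, error_rate: float) -> int:
    """One noisy path: delivers the intended bit, flipped with probability error_rate."""
    return intended ^ (random.random() < error_rate)

def majority_bit(intended: int, channels: int, error_rate: float = 0.1) -> int:
    """Majority vote over several noisy channels working in parallel."""
    ones = sum(noisy_channel(intended, error_rate) for _ in range(channels))
    return 1 if ones * 2 > channels else 0

def error_fraction(channels: int, trials: int = 20_000) -> float:
    """How often a bit that should be 1 comes out as 0."""
    return sum(majority_bit(1, channels) != 1 for _ in range(trials)) / trials

print("12 parallel channels:", error_fraction(12))  # errors are very rare
print(" 1 channel          :", error_fraction(1))   # roughly 10% of bits come out wrong
```

With many electrons flowing side by side, the fluctuations average out; with one, every fluctuation shows up as a wrong bit.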

So, we are at the stage where we don't really know how to compute with these kinds of devices. And the only thing we know of right now that can compute with these kinds of devices is the brain.

OK, so a computer picks a specific item of data from memory, sends it into the processor or the ALU, and then puts the result back into memory. That's the red path that's highlighted. The way brains work, as I told you, is that you have got all these neurons. And the way they represent information is that they break up the data into little pieces that are represented by pulses in different neurons. So you have all these pieces of data distributed throughout the network. And then the way you process that data to get a result is that you translate this pattern of activity into a new pattern of activity, just by it flowing through the network. You set up these connections such that the input pattern just flows through and generates the output pattern.
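Here is a very small sketch of that idea, not the speaker's actual model: the data is a pattern of activity across a few nodes, and processing is just letting that pattern flow through a fixed set of connections. The weights and threshold below are made up for illustration.

```python
# A tiny "pattern in, pattern out" network: no central processor, just activity
# flowing through fixed connections. The weights below are made up.
weights = [
    [0.6, 0.6, 0.0],   # output node 0 listens to input nodes 0 and 1
    [0.0, 0.6, 0.6],   # output node 1 listens to input nodes 1 and 2
    [0.6, 0.0, 0.6],   # output node 2 listens to input nodes 0 and 2
]

def propagate(input_pattern, weights, threshold=0.5):
    """Turn one activity pattern into another by letting it flow through the connections."""
    output_pattern = []
    for row in weights:
        drive = sum(w * x for w, x in zip(row, input_pattern))
        output_pattern.append(1 if drive >= threshold else 0)
    return output_pattern

print(propagate([1, 0, 0], weights))  # [1, 0, 1]: the input pattern generates a new pattern
```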

What you see here is that there are these redundant connections. So if one piece of the data gets clobbered and doesn't show up over here, these two other pieces can activate the missing part through those redundant connections. So even when you go to these crappy devices, where sometimes you want a one and you get a zero and it doesn't show up, there's redundancy in the network that can actually recover the missing information. It makes the brain inherently robust. Over here, you have a system where you store data locally, and it's brittle, because each of these steps has to be flawless, otherwise you lose that data, whereas in the brain, you have a system that stores data in a distributed way, and it's robust.
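One familiar way to make "the other pieces can recover the missing part" concrete is parity-style redundancy; this is an analogy borrowed from error-correcting codes, not the brain's actual mechanism, and the piece values are arbitrary.

```python
# Parity-style redundancy: the surviving pieces plus one redundant piece
# regenerate a piece of the data that got clobbered.
pieces = [0b1011, 0b0110, 0b1100]           # the data, broken into three pieces
parity = pieces[0] ^ pieces[1] ^ pieces[2]  # one redundant piece

received = [pieces[0], None, pieces[2]]     # piece 1 got lost in a flaky device

recovered = parity ^ received[0] ^ received[2]
assert recovered == pieces[1]
print(f"recovered piece: {recovered:04b}")  # recovered piece: 0110
```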

What I want to basically talk about is my dream, which is to build a computer that works like the brain. This is something that we've been working on for the last couple of years. And I'm going to show you a system that we designed to model the retina, which is a piece of brain that lines the inside of your eyeball. We didn't do this by actually writing code, like you do in a computer. In fact, the processing that happens in that little piece of brain is very similar to the kind of processing that computers do when they stream video over the Internet. They want to compress the information—they just want to send the changes, what's new in the image, and so on—and that is how your eyeball is able to squeeze all that information down to your optic nerve, to send to the rest of the brain.
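The "just send the changes" idea is easy to sketch; the delta encoder below is a bare-bones illustration, not the retina's circuit or any real video codec, and it is roughly the software approach the next paragraph says they did not take.

```python
def delta_encode(previous_frame, current_frame):
    """Send only the pixels that changed since the last frame, as (index, new_value) pairs."""
    return [
        (i, new) for i, (old, new) in enumerate(zip(previous_frame, current_frame))
        if new != old
    ]

def delta_decode(previous_frame, changes):
    """Rebuild the current frame from the previous frame plus the changes."""
    frame = list(previous_frame)
    for i, value in changes:
        frame[i] = value
    return frame

prev = [5, 5, 5, 9, 9, 9, 5, 5]
curr = [5, 5, 7, 9, 9, 9, 5, 2]
changes = delta_encode(prev, curr)
print(changes)                          # [(2, 7), (7, 2)] -- far fewer than 8 pixels to send
assert delta_decode(prev, changes) == curr
```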

Instead of doing this in software, or doing those kinds of algorithms, we went and talked to neurobiologists who have actually reverse engineered that piece of brain that's called the retina. They figured out all the different cells, and they figured out the network, and we just took that network and used it as the blueprint for the design of a silicon chip. So now the neurons are represented by little nodes or circuits on the chip, and the connections among the neurons are modeled by transistors. And these transistors are behaving essentially just like ion channels behave in the brain. That gives you the same kind of robust architecture that I described.

Here is what our artificial eye actually looks like. The retina chip that we designed sits behind this lens here. I'm going to show you a video of the output that the silicon retina produced when it was looking at Kareem Zaghloul, the student who designed the chip. Let me explain what you're going to see, because it's putting out different kinds of information; it's not as straightforward as a camera. The retina chip extracts four different kinds of information. It extracts regions with dark contrast, which will show up on the video as red, and it extracts regions with white or light contrast, which will show up on the video as green.

These are Kareem's dark eyes, and that's the white background that you see here. And then it also extracts movement. When Kareem moves his head to the right, you will see this blue activity there; it represents regions where the contrast is increasing in the image, where it's going from dark to light. And you also see this yellow activity, which represents regions where contrast is decreasing; it's going from light to dark. And as for these four types of information: your optic nerve has about a million fibers in it, and 900,000 of those fibers send these four types of information. So we are really duplicating the kind of signals that you have on the optic nerve.
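To pin down those four channels, here is a toy version operating on tiny one-dimensional grayscale frames; the thresholds, values, and function names are made up for illustration and are not taken from the chip.

```python
def retina_channels(prev_frame, curr_frame, mid=128, delta=10):
    """Toy version of the four kinds of output described above.

    dark / light : sustained contrast in the current frame ("red" / "green")
    brighter     : contrast increasing, dark to light       ("blue")
    darker       : contrast decreasing, light to dark       ("yellow")
    """
    dark     = [int(p < mid)       for p in curr_frame]
    light    = [int(p >= mid)      for p in curr_frame]
    brighter = [int(c - p > delta) for p, c in zip(prev_frame, curr_frame)]
    darker   = [int(p - c > delta) for p, c in zip(prev_frame, curr_frame)]
    return dark, light, brighter, darker

prev = [200, 200,  40,  40, 200]
curr = [200,  40,  40, 200, 200]   # a dark "edge" shifted one pixel to the right
for name, channel in zip(("dark", "light", "brighter", "darker"),
                         retina_channels(prev, curr)):
    print(f"{name:8s}: {channel}")
```

The moving edge lights up the "darker" channel on one side and the "brighter" channel on the other, which is the blue-and-yellow activity described above.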

What you notice here is that these snapshots taken from the output of the retina chip are very sparse, right? It doesn't light up green everywhere in the background, only on the edges, and then in the hair, and so on. And this is the same thing you see when people compress video to send: they want to make it very sparse, because that file is smaller. And this is what the retina is doing, and it's doing it just with the circuitry of this network of neurons interacting in there, which we've captured on the chip.
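Sparseness is cheap to quantify; here is a minimal sketch, not the chip's encoding, that keeps only the active pixels and measures how much of the frame stays silent.

```python
def sparse_encode(frame):
    """Keep only the active (nonzero) pixels, as (index, value) pairs."""
    return [(i, v) for i, v in enumerate(frame) if v != 0]

frame = [0, 0, 3, 0, 0, 0, 0, 1, 0, 0, 0, 0]   # activity only at a couple of edges
encoded = sparse_encode(frame)
sparsity = 1 - len(encoded) / len(frame)
print(encoded)                                 # [(2, 3), (7, 1)]
print(f"{sparsity:.0%} of pixels are silent")  # 83% of pixels are silent
```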

But the point that I want to make, I'll show you up here. So this image here is going to look like these ones, but here I'll show you that we can reconstruct the image, so you can almost recognize Kareem in that top part there. And so, here you go. Yes, so that's the idea. When you stand still, you just see the light and dark contrasts. But when it's moving back and forth, the retina picks up these changes. And that's why, you know, when you're sitting here and something happens in your background, you just move your eyes to it. There are these cells that detect change, and you move your attention to them. So those are very important for catching somebody who's trying to sneak up on you.

Let me just end by saying that this is what happens when you put Africa in a piano, OK. This is a steel drum here that has been modified, and that's what happens when you put Africa in a piano. And what I would like us to do is put Africa in the computer, and come up with a new kind of computer that will generate thought, imagination, be creative and things like that. Thank you. (Applause)

Chris Anderson: Question for you, Kwabena. Do you put together in your mind the work you're doing, the future of Africa, this conference—what connections can we make, if any, between them?

Kwabena Boahen: Yes, like I said at the beginning, I got my first computer when I was a teenager, growing up in Accra. And I had this gut reaction that this was the wrong way to do it. It was very brute force; it was very inelegant. I don't think that I would've had that reaction if I'd grown up reading all this science fiction, hearing about R2-D2, whatever it was called, and just, you know, buying into this hype about computers. I was coming at it from a different perspective, and I was bringing that different perspective to bear on the problem. And I think a lot of people in Africa have this different perspective, and I think that's going to impact technology. And that's going to impact how it's going to evolve. And I think you're going to be able to see, to use that infusion, to come up with new things, because you're coming from a different perspective. I think we can contribute. We can dream like everybody else.

CA: Thanks Kwabena, that was really interesting. Thank you.

(Applause)